Search for: All records; Creators/Authors contains: "Liu, Boyang"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Density estimation is a widely used method for unsupervised anomaly detection: after learning the density function, data points with relatively low densities are classified as anomalies. Unfortunately, anomalies present in the training data can severely distort the density estimation process, posing significant challenges to the use of more sophisticated density estimators such as those based on deep neural networks. In this work, we propose RobustRealNVP, a deep density estimation framework that enhances the robustness of flow-based density estimation methods, enabling their application to unsupervised anomaly detection. RobustRealNVP differs from existing flow-based models in two respects. First, it discards data points with low estimated densities during optimization to prevent them from corrupting the density estimate. Second, it imposes Lipschitz regularization on the flow-based model to enforce smoothness in the estimated density function. We demonstrate the robustness of our algorithm against anomalies in the training data from both theoretical and empirical perspectives. The results show that our algorithm achieves competitive performance compared with state-of-the-art unsupervised anomaly detection methods.
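    A minimal PyTorch sketch of the two ingredients above, under stated assumptions: a trimming step that drops the lowest-density fraction of each batch, and spectral normalization as one common way to impose a Lipschitz constraint. The names (robust_flow_step, flow_log_prob), the anomaly fraction EPS, and the spectral-norm choice are illustrative, not the paper's actual implementation.

    ```python
    # Illustrative sketch only: trims low-density points at each step and
    # uses spectral normalization to encourage Lipschitz smoothness.
    import torch
    import torch.nn as nn

    EPS = 0.05  # assumed upper bound on the fraction of anomalies

    # Spectral normalization bounds each layer's Lipschitz constant; a
    # coupling-style net like this would sit inside the flow. (One common
    # regularization choice; the paper's exact mechanism may differ.)
    coupling_net = nn.Sequential(
        nn.utils.spectral_norm(nn.Linear(2, 64)),
        nn.ReLU(),
        nn.utils.spectral_norm(nn.Linear(64, 2)),
    )

    def robust_flow_step(flow_log_prob, batch, optimizer):
        """One training step that discards the lowest-density points."""
        log_p = flow_log_prob(batch)            # per-sample log-density
        n_keep = int((1.0 - EPS) * len(batch))
        kept, _ = torch.topk(log_p, n_keep)     # keep the densest points
        loss = -kept.mean()                     # NLL on the trimmed batch
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    ```

    Discarded points contribute no gradient, so suspected anomalies cannot pull the estimated density toward themselves during training.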
  2. Unsupervised anomaly detection plays a crucial role in many critical applications. Driven by the success of deep learning, recent years have witnessed growing interest in applying deep neural networks (DNNs) to anomaly detection problems. A common approach is to use an autoencoder to learn a feature representation of the normal observations in the data; the reconstruction error of the autoencoder is then used as an outlier score to detect anomalies. However, due to the high complexity brought about by the over-parameterization of DNNs, the reconstruction error of anomalies can also be small, which hampers the effectiveness of these methods. To alleviate this problem, we propose a robust framework that uses collaborative autoencoders to jointly identify normal observations in the data while learning their feature representation. We investigate the theoretical properties of the framework and empirically show its outstanding performance compared with other DNN-based methods. Our experimental results also show the framework's resilience to missing values relative to other baseline methods.
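    A minimal PyTorch sketch of the baseline approach described above (autoencoder reconstruction error as an outlier score); the collaborative, joint-identification training of the proposed framework is not reproduced here, and reconstruction_scores with its defaults is a hypothetical helper.

    ```python
    # Baseline sketch: reconstruction error of a small autoencoder used as
    # an outlier score. Names and hyperparameters are illustrative.
    import torch
    import torch.nn as nn

    def reconstruction_scores(X, hidden=8, epochs=200, lr=1e-2):
        """Train a small autoencoder on X and return per-point errors."""
        d = X.shape[1]
        model = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                              nn.Linear(hidden, d))
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = ((model(X) - X) ** 2).mean()
            loss.backward()
            opt.step()
        with torch.no_grad():
            # Higher reconstruction error -> more likely an anomaly.
            return ((model(X) - X) ** 2).mean(dim=1)

    # Example: flag the 5% of points with the largest errors as anomalies.
    scores = reconstruction_scores(torch.randn(500, 10))
    anomalies = scores > torch.quantile(scores, 0.95)
    ```

    The failure mode the paper targets is visible here: an over-parameterized model can drive the error down even on anomalies, which is why the proposed framework identifies normal observations jointly with training rather than scoring after the fact.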
  3. Multilevel modeling and multi-task learning are two widely used approaches for modeling nested (multi-level) data, which contain observations that can be clustered into groups characterized by group-level features. Despite the similarity of the problems they address, the explicit relationship between the two approaches has not been carefully examined. In this paper, we present a comparative analysis of the two methods to illustrate their strengths and limitations when applied to two-level nested data. We provide a detailed analysis demonstrating, from an optimization perspective, the equivalence of their formulations under a mild condition. We also demonstrate their limitations in predictive performance and, especially, their difficulty in identifying potential cross-scale interactions between local and group-level features when applied to datasets with either a small number of groups or limited training examples per group. To overcome these limitations, we propose a novel method for disaggregating the coarse-scale values of the group-level features in the nested data. Experimental results on both synthetic and real-world data show that the disaggregated group-level features can significantly enhance the prediction accuracy of the models and identify cross-scale interactions more effectively.
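    As a sketch of the correspondence discussed above, assuming standard linear formulations (the paper's exact "mild condition" is not reproduced here): a random-slope multilevel model paired with multi-task learning that treats each group as a task and regularizes toward a shared weight vector.

    ```latex
    % Observation i in group j: local features x_{ij}, group features z_j.
    % Random-slope multilevel model:
    \[
      y_{ij} = x_{ij}^\top(\beta + u_j) + z_j^\top\gamma + \epsilon_{ij},
      \qquad u_j \sim \mathcal{N}(0, \tau^2 I)
    \]
    % Multi-task learning, one task per group, tied to a shared mean:
    \[
      \min_{\{w_j\},\,\bar{w}} \; \sum_{j}\sum_{i}
        \bigl(y_{ij} - x_{ij}^\top w_j\bigr)^2
        + \lambda \sum_{j} \lVert w_j - \bar{w} \rVert_2^2
    \]
    % MAP estimation of the random effects u_j yields the quadratic
    % penalty above (with \lambda proportional to \sigma^2/\tau^2), the
    % usual route to relating the two formulations.
    ```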
  4. The fast and efficient synthesis of nanoparticles on flexible and lightweight substrates is increasingly critical for various medical and wearable applications. However, conventional high-temperature (high-T) processes for nanoparticle synthesis are intrinsically incompatible with temperature-sensitive substrates such as textiles and paper (i.e., low-T substrates). In this work, we report a non-contact, 'fly-through' method for synthesizing nanoparticles on low-T substrates by rapid radiative heating over short timescales. As a demonstration, textile substrates loaded with a platinum (Pt) salt precursor are rapidly heated and quenched as they move across a 2000 K heating source at a continuous production speed of 0.5 cm s⁻¹. The rapid radiative heating induces the thermal decomposition of various precursor salts and nanoparticle formation, while the short duration ensures negligible change to the low-T substrate along with greatly improved production efficiency. The method can be applied generally to the synthesis of metal nanoparticles (e.g., gold and ruthenium) on various low-T substrates (e.g., paper). The non-contact, continuous 'fly-through' synthesis offers a robust and efficient way to produce supported nanoparticles on flexible and lightweight substrates, and it is also promising for ultrafast, roll-to-roll manufacturing toward viable applications.
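    To make the short timescale concrete: the residence time of any point on the substrate under the heater is the heating-zone length divided by the translation speed. The 1 cm zone length below is an assumed illustrative value, not a figure reported above.

    ```latex
    % Residence time under the heating zone (L = 1 cm is hypothetical):
    \[
      t = \frac{L}{v} = \frac{1\ \mathrm{cm}}{0.5\ \mathrm{cm\,s^{-1}}}
        = 2\ \mathrm{s}
    \]
    ```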